
    Representing moisture fluxes and phase changes in glacier debris cover using a reservoir approach

    Due to the complexity of treating moisture in supraglacial debris, surface energy balance models to date have neglected moisture infiltration and phase changes in the debris layer. The latent heat flux (QL) is also often excluded due to the uncertainty in determining the surface vapour pressure. To quantify the importance of moisture for the surface energy and climatic mass balance (CMB) of debris-covered glaciers, we developed a simple reservoir parameterization for the debris ice and water content, as well as an estimation of the latent heat flux. The parameterization was incorporated into a CMB model adapted for debris-covered glaciers. We present the results of two point simulations, using both our new “moist” and the conventional “dry” approaches, on the Miage Glacier, Italy, during summer 2008 and fall 2011. The former year coincides with available in situ glaciological and meteorological measurements, including the first eddy-covariance measurements of the turbulent fluxes over supraglacial debris, while the latter contains two refreeze events that permit evaluation of the influence of phase changes. The simulations demonstrate a clear influence of moisture on the glacier energy and mass-balance dynamics. When water and ice are considered, heat transmission to the underlying glacier ice is lower, as the effective thermal diffusivity of the saturated debris layers is reduced by increases in both the density and the specific heat capacity of the layers. In combination with surface heat extraction by QL, sub-debris ice melt is reduced by 3.1% in 2008 and by 7.0% in 2011 when moisture effects are included. However, the influence of the parameterization on the total accumulated mass balance varies seasonally. In summer 2008, mass loss due to surface vapour fluxes more than compensates for the reduction in ice melt, such that total ablation increases by 4.0%. Conversely, in fall 2011, the modulation of basal debris temperature by debris ice results in a decrease in total ablation of 2.1%. Although the parameterization is a simplified representation of the moist physics of glacier debris, it is a novel attempt at including moisture in a numerical model of debris-covered glaciers and one that opens up additional avenues for future research.
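
    The diffusivity mechanism invoked here follows from the definition of thermal diffusivity, kappa = k / (rho * c): filling pore space with water raises the layer's bulk density and specific heat while the conductivity term changes little, so kappa falls. A minimal Python sketch of that relationship, assuming a simple volumetric mixing of rock and water properties rather than the paper's actual reservoir scheme:

        def effective_diffusivity(k_debris, phi, theta_w,
                                  rho_rock=2700.0, c_rock=750.0,
                                  rho_water=1000.0, c_water=4181.0):
            """Effective thermal diffusivity (m^2 s^-1) of a debris layer.

            k_debris: bulk thermal conductivity (W m^-1 K^-1), held fixed here.
            phi:      porosity (0-1).
            theta_w:  volumetric water content (0 <= theta_w <= phi).
            """
            # Assumed volumetric mixing of rock and water; air is neglected.
            rho_c = (1.0 - phi) * rho_rock * c_rock + theta_w * rho_water * c_water
            return k_debris / rho_c

        # Saturating the pore space raises rho*c, so diffusivity drops and less
        # heat reaches the ice below, consistent with reduced sub-debris melt.
        dry = effective_diffusivity(k_debris=1.0, phi=0.3, theta_w=0.0)
        wet = effective_diffusivity(k_debris=1.0, phi=0.3, theta_w=0.3)
        print(f"dry: {dry:.2e} m^2/s, saturated: {wet:.2e} m^2/s")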

    A portfolio approach to massively parallel Bayesian optimization

    One way to reduce the time of conducting optimization studies is to evaluate designs in parallel rather than just one at a time. For expensive-to-evaluate black boxes, batch versions of Bayesian optimization have been proposed. They work by building a surrogate model of the black box that can be used to select the designs to evaluate efficiently via an infill criterion. Still, with higher levels of parallelization becoming available, the strategies that work for a few tens of parallel evaluations become limiting, in particular due to the complexity of selecting more evaluations. This is even more crucial when the black box is noisy, necessitating more evaluations as well as repeated experiments. Here we propose a scalable strategy that can keep up with massive batching natively, focused on the exploration/exploitation trade-off and a portfolio allocation. We compare the approach with related methods on deterministic and noisy functions, for mono- and multi-objective optimization tasks. These experiments show similar or better performance than existing methods, while being orders of magnitude faster.
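
    The abstract does not spell out the selection mechanism, but the portfolio idea can be illustrated with a toy sketch: fit a Gaussian-process surrogate, then fill a large batch by assigning each slot a different exploration weight in a UCB-style score, so the batch spans the exploration/exploitation spectrum in one pass. The kernel, weights, and test function below are assumptions for illustration, not the authors' method:

        import numpy as np

        def rbf(a, b, ls=0.2):
            d = a[:, None] - b[None, :]
            return np.exp(-0.5 * (d / ls) ** 2)

        rng = np.random.default_rng(0)
        X = rng.uniform(0, 1, 8)                       # evaluated designs
        y = np.sin(6 * X) + 0.1 * rng.normal(size=8)   # noisy black-box values
        Xc = np.linspace(0, 1, 500)                    # candidate designs

        K = rbf(X, X) + 1e-2 * np.eye(8)               # Gram matrix with noise
        Ks = rbf(Xc, X)
        Kinv = np.linalg.inv(K)
        mu = Ks @ Kinv @ y                             # posterior mean
        var = 1.0 - np.einsum('ij,ij->i', Ks @ Kinv, Ks)
        sd = np.sqrt(np.clip(var, 0.0, None))

        batch = []
        mask = np.ones(len(Xc), dtype=bool)
        for beta in np.linspace(0.0, 3.0, 50):         # the portfolio of trade-offs
            score = np.where(mask, mu + beta * sd, -np.inf)
            i = int(np.argmax(score))
            batch.append(float(Xc[i]))
            mask[i] = False                            # no duplicates in the batch
        print(f"batch of {len(batch)} designs, e.g. {sorted(batch)[:5]}")

    Because each slot is scored independently once its weight is fixed, the cost of assembling the batch grows linearly with batch size, which is the property massive parallelism needs.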

    A standardised sampling protocol for robust assessment of reach-scale fish community diversity in wadeable New Zealand streams

    The New Zealand fish fauna contains species that are affected not only by river system connectivity, but also by catchment- and local-scale changes in landcover, water quality and habitat quality. Consequently, native fish have potential as multi-scale bioindicators of human pressure on stream ecosystems, yet no standardised, repeatable and scientifically defensible methods currently exist for effectively quantifying their abundance or diversity in New Zealand stream reaches. Here we report on the testing of a backpack electrofishing method, modified from that used by the United States Environmental Protection Agency, on a wide variety of wadeable stream reaches throughout New Zealand. Seventy-three first- to third-order stream reaches were fished with a single pass over lengths of 150-345 m. Time taken to sample a reach using single-pass electrofishing ranged from 1 to 8 h. Species accumulation curves indicated that, irrespective of location, continuous sampling of 150 stream metres is required to accurately describe reach-scale fish species richness using this approach. Additional species detection beyond 150 m was rare (<10%), with a single additional species detected at only two of the 17 reaches sampled beyond this distance. A positive relationship was also evident between species detection and area fished, although stream length rather than area appeared to be the better predictor. The method tested provides a standardised and repeatable approach for regional and/or national reporting on the state of New Zealand's freshwater fish communities and trends in richness and abundance over time.
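
    The species-accumulation logic is simple to reproduce: record which species each successive sub-reach adds and watch where the cumulative richness curve flattens. A hypothetical sketch with an invented species pool and invented pass records (only the field protocol itself is as described above):

        import random

        random.seed(1)
        # Invented species pool and pass records, for illustration only.
        SPECIES = ["longfin eel", "shortfin eel", "inanga", "common bully",
                   "upland bully", "koaro", "redfin bully"]
        # One (metres_fished, species_caught) record per 15 m sub-reach.
        pass_records = [(m, random.sample(SPECIES, random.randint(0, 3)))
                        for m in range(15, 346, 15)]

        seen = set()
        for metres, caught in pass_records:
            seen.update(caught)                 # cumulative unique species
            print(f"{metres:3d} m fished -> {len(seen)} species detected")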

    Characterization and valuation of uncertainty of calibrated parameters in stochastic decision models

    We evaluated the implications of different approaches to characterizing uncertainty in the calibrated parameters of stochastic decision models (DMs) for the quantified value of such uncertainty in decision making. We used a microsimulation DM of colorectal cancer (CRC) screening to conduct a cost-effectiveness analysis (CEA) of 10-year colonoscopy screening. We calibrated the natural history model of CRC to epidemiological data with different degrees of uncertainty and obtained the joint posterior distribution of the parameters using a Bayesian approach. We conducted a probabilistic sensitivity analysis (PSA) on all the model parameters with different characterizations of the uncertainty of the calibrated parameters, and estimated the value of uncertainty of the different characterizations with a value-of-information analysis. All analyses were conducted using high-performance computing resources running the Extreme-scale Model Exploration with Swift (EMEWS) framework. The posterior distribution had high correlation among some parameters; the parameters of the Weibull hazard function for the age of onset of adenomas had the highest posterior correlation, of -0.958. Considering the full posterior distribution versus the maximum-a-posteriori estimate of the calibrated parameters made little difference to the spread of the distribution of the CEA outcomes, with a similar expected value of perfect information (EVPI) of $653 and $685, respectively, at a willingness-to-pay (WTP) threshold of $66,000 per quality-adjusted life-year (QALY). Ignoring correlation in the posterior distribution of the calibrated parameters produced the widest distribution of CEA outcomes and the highest EVPI, $809, at the same WTP. Different characterizations of the uncertainty of calibrated parameters thus affect the expected value of reducing uncertainty in the CEA: ignoring the inherent correlation among calibrated parameters in a PSA overestimates the value of uncertainty.
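
    The EVPI quantity used here is the standard value-of-information identity, EVPI = E_theta[max_d NB(d, theta)] - max_d E_theta[NB(d, theta)], estimated over the PSA draws. A compact sketch with synthetic net-benefit samples (the strategy labels and distributions are placeholders, not the study's outputs):

        import numpy as np

        rng = np.random.default_rng(42)
        WTP = 66_000   # willingness-to-pay threshold ($/QALY), as in the abstract

        # nb[i, d]: synthetic net monetary benefit of strategy d on PSA draw i.
        # Columns: 0 = comparator, 1 = 10-year colonoscopy screening (placeholders).
        n = 10_000
        nb = np.column_stack([
            rng.normal(10_000, 1_500, n),
            rng.normal(10_400, 2_000, n),
        ])

        # EVPI = E[max_d NB] - max_d E[NB], averaged over the PSA draws.
        evpi = nb.max(axis=1).mean() - nb.mean(axis=0).max()
        print(f"EVPI per person at ${WTP:,}/QALY: ${evpi:,.0f}")

    The same estimator explains the paper's sensitivity to correlation: changing the joint distribution of the PSA draws changes the inner max, and hence the EVPI, even when each marginal is unchanged.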

    Trajectory-oriented optimization of stochastic epidemiological models

    Epidemiological models must be calibrated to ground truth for downstream tasks such as producing forward projections or running what-if scenarios. The meaning of calibration changes in the case of a stochastic model, since the output of such a model is generally described via an ensemble or a distribution, and each member of the ensemble is usually mapped (explicitly or implicitly) to a random number seed. With the goal of finding not only the input parameter settings but also the random seeds that are consistent with the ground truth, we propose a class of Gaussian process (GP) surrogates along with an optimization strategy based on Thompson sampling. This Trajectory Oriented Optimization (TOO) approach produces actual trajectories close to the empirical observations, rather than a set of parameter settings where only the mean simulation behavior matches the ground truth.
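
    A toy version of the idea, with a stand-in stochastic simulator rather than an epidemiological model: a GP surrogate models trajectory mismatch as a function of one input parameter, and each iteration draws a single posterior sample and evaluates the simulator, with a fresh seed, at that draw's minimiser (Thompson sampling). Everything below is illustrative, not the paper's surrogate class:

        import numpy as np

        rng = np.random.default_rng(3)
        TARGET = 0.62                            # pretend ground-truth parameter

        def simulate_mismatch(theta, seed):
            # Stand-in simulator: distance between simulated and observed
            # trajectories, with seed-dependent stochasticity.
            r = np.random.default_rng(seed)
            return (theta - TARGET) ** 2 + 0.02 * r.normal()

        def rbf(a, b, ls=0.15):
            return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

        grid = np.linspace(0, 1, 200)
        thetas, seeds, scores = [], [], []
        for t in range(20):
            if len(thetas) < 3:                  # a few random starts
                theta = float(rng.uniform())
            else:
                Xa = np.array(thetas)
                K = rbf(Xa, Xa) + 1e-4 * np.eye(len(Xa))
                Kinv = np.linalg.inv(K)
                Ks = rbf(grid, Xa)
                mu = Ks @ Kinv @ np.array(scores)
                cov = rbf(grid, grid) - Ks @ Kinv @ Ks.T
                draw = rng.multivariate_normal(mu, cov + 1e-8 * np.eye(len(grid)))
                theta = float(grid[int(np.argmin(draw))])   # Thompson sample
            seed = int(rng.integers(1_000_000))
            thetas.append(theta)
            seeds.append(seed)
            scores.append(simulate_mismatch(theta, seed))

        best = int(np.argmin(scores))
        print(f"best (theta, seed): ({thetas[best]:.3f}, {seeds[best]})")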

    Multiannual observations and modelling of seasonal thermal profiles through supraglacial debris in the Central Himalaya

    Many glaciers in the Central Himalaya are covered with rock debris that modifies the transfer of heat from the atmosphere to the underlying ice. These debris-covered glaciers are experiencing rapid mass loss at rates that have accelerated during the last two decades. Quantifying recent and future glacier mass change requires understanding the relationship between debris thickness and ablation, particularly through the summer monsoon season. We present air, near-surface and debris temperatures measured during three monsoon seasons at five sites on Khumbu Glacier in Nepal, and compare these results to similar measurements from two other debris-covered glaciers in this region. Seasonal debris temperature profiles are approximately linear and consistent between sites for thick (>0.5 m) and thin (<0.5 m) debris across thicknesses ranging from 0.26 to 2.0 m. The similarities between these multiannual data imply that they are representative of supraglacial debris layers in the monsoon-influenced Himalaya more generally. We compare three methods to calculate sub-debris ablation, including using our temperature measurements with a thermal diffusion model that incorporates a simplified treatment of debris moisture. Estimated ablation between 3 June and 11 October at around 5000 m above sea level ranged from 0.10 m water equivalent beneath 1.5 m of debris to 0.47 m water equivalent beneath 0.3 m of debris. However, these values are small when compared to remotely observed rates of surface lowering, suggesting that mass loss from these debris-covered glaciers is greatly enhanced by supraglacial and englacial processes that locally amplify ablation.
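
    With near-linear temperature profiles, the conductive flux through the debris reduces to Fourier's law, Q = k (T_s - T_i) / h, and dividing by the latent heat of fusion gives a melt rate. A sketch of that calculation, assuming a round-number conductivity rather than values from the paper:

        RHO_ICE = 900.0      # ice density, kg m^-3
        L_FUSION = 3.34e5    # latent heat of fusion, J kg^-1

        def sub_debris_melt(k, t_surface, thickness, days):
            """Melt (m w.e.) from steady conduction through a debris layer.

            k:         debris thermal conductivity (W m^-1 K^-1), assumed value.
            t_surface: mean debris surface temperature (degC); ice base at 0 degC.
            thickness: debris thickness (m).
            """
            q = k * t_surface / thickness                         # flux, W m^-2
            melt_ice = q * days * 86_400 / (RHO_ICE * L_FUSION)   # metres of ice
            return melt_ice * RHO_ICE / 1000.0                    # m water equivalent

        # Thicker debris -> smaller conductive flux -> less seasonal melt.
        for h in (0.3, 1.5):
            print(f"{h} m debris: {sub_debris_melt(0.9, 8.0, h, 130):.2f} m w.e.")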

    Modeling hepatitis C micro-elimination among people who inject drugs with direct-acting antivirals in metropolitan Chicago

    Hepatitis C virus (HCV) infection is a leading cause of chronic liver disease and mortality worldwide. Direct-acting antiviral (DAA) therapy leads to high cure rates. However, persons who inject drugs (PWID) are at risk for reinfection after cure and may require multiple DAA treatments to reach the World Health Organization’s (WHO) goal of HCV elimination by 2030. Using an agent-based model (ABM) that accounts for the complex interplay of demographic factors, risk behaviors, social networks, and geographic location for HCV transmission among PWID, we examined the combination(s) of DAA enrollment rate (2.5%, 5%, 7.5%, 10%), adherence (60%, 70%, 80%, 90%) and frequency of DAA treatment courses needed to achieve the WHO’s goal of reducing incident chronic infections by 90% by 2030 among a large population of PWID from Chicago, IL and surrounding suburbs. We also estimated the economic DAA costs associated with each scenario. Our results indicate that a DAA treatment rate of >7.5% per year with 90% adherence results in 75% of enrolled PWID requiring only a single DAA course; however, 19% would require 2 courses, 5% would require 3, and <2% would require 4, with an overall DAA cost of $325 million to achieve the WHO goal in metropolitan Chicago. We estimate a 28% increase in the overall DAA cost under low adherence (70%) compared to high adherence (90%). Our modeling results have important public health implications for HCV elimination among U.S. PWID. Using a range of feasible treatment enrollment and adherence rates, we report robust findings supporting the need to address re-exposure and reinfection among PWID to reduce HCV incidence.
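
    The cost arithmetic behind such scenarios composes simply: expected courses per enrolled person (from the shares needing 1, 2, 3 or 4 courses) times the number treated times the price per course. A back-of-the-envelope sketch in which the population size and course price are placeholders, not the study's inputs:

        def scenario_cost(n_treated, course_shares, price_per_course):
            """Total DAA cost given the shares needing 1..k treatment courses."""
            expected_courses = sum(k * s for k, s in enumerate(course_shares, start=1))
            return n_treated * expected_courses * price_per_course

        # Shares echo the abstract's 90%-adherence scenario (75% one course,
        # 19% two, 5% three, ~1% four); population and price are placeholders.
        shares = [0.75, 0.19, 0.05, 0.01]
        print(f"${scenario_cost(10_000, shares, 25_000):,.0f}")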

    Impact of debris cover on glacier ablation and atmosphere - glacier feedbacks in the Karakoram

    This work was partly carried out under the Collaborative Adaptation Research Initiative in Africa and Asia (CARIAA), with financial support from the UK Government’s Department for International Development and the International Development Research Centre, Ottawa, Canada. The Karakoram range of the Hindu-Kush Himalaya is characterized by both extensive glaciation and a widespread prevalence of surficial debris cover on the glaciers. Surface debris exerts a strong control on glacier surface-energy and mass fluxes and, by modifying surface boundary conditions, has the potential to alter atmosphere–glacier feedbacks. To date, the influence of debris on Karakoram glaciers has only been directly assessed by a small number of glaciological measurements over short periods. Here, we include supraglacial debris in a high-resolution, interactively coupled atmosphere–glacier modeling system. To investigate the glaciological and meteorological changes that arise due to the presence of debris, we perform two simulations using the coupled model from 1 May to 1 October 2004: one that treats all glacier surfaces as debris-free and one that introduces a simplified specification of debris thickness. The basin-averaged impact of debris is a reduction in ablation of 14%, although the difference exceeds 5 m w.e. on the lowest-altitude glacier tongues. The relatively modest reduction in basin-mean mass loss results in part from non-negligible sub-debris melt rates under thicker covers and from compensating increases in melt under thinner debris, and may help to explain the lack of distinct differences in recent elevation changes between clean and debris-covered ice. The presence of debris also strongly alters the surface boundary condition, and thus heat exchanges with the atmosphere; near-surface meteorological fields at lower elevations and their vertical gradients; and the development of the atmospheric boundary layer. These findings are relevant for glacio-hydrological studies on debris-covered glaciers and contribute towards an improved understanding of glacier behavior in the Karakoram.
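
    The compensating behaviour described here, enhanced melt under thin debris and suppressed melt under thick debris, is often summarised outside coupled models with a simple Ostrem-style curve. A sketch of one such form; the functional shape and parameter values are illustrative assumptions, not this study's coupled scheme:

        def melt_factor(h, h_eff=0.06, h_crit=0.04):
            """Multiplier on clean-ice melt for debris thickness h (m).

            Thin debris (h < h_crit) darkens the surface and enhances melt;
            beyond the critical thickness, insulation dominates and melt decays.
            Parameter values are illustrative, not taken from this study.
            """
            if h <= 0.0:
                return 1.0
            enhancement = 1.0 + 0.5 * min(h, h_crit) / h_crit     # albedo-driven boost
            insulation = h_eff / (h_eff + max(h - h_crit, 0.0))   # hyperbolic decay
            return enhancement * insulation

        for h in (0.0, 0.02, 0.04, 0.2, 1.0):
            print(f"h = {h:4.2f} m -> melt x {melt_factor(h):.2f}")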